Newsgroups: comp.lang.c
Path: new-news.sprintlink.net!eskimo!scs
From: scs@eskimo.com (Steve Summit)
Subject: Re: 16bit vs. 32bit
X-Nntp-Posting-Host: eskimo.com
Message-ID: <Dp3EH0.I92@eskimo.com>
Sender: news@eskimo.com (News User Id)
Organization: schmorganization
References: <4iui27$egk@news.netam.net> <DovvHG.3DK@eskimo.com> <315845E6.64FC@oc.com>
Date: Sat, 30 Mar 1996 18:10:12 GMT

In article <315845E6.64FC@oc.com>, Larry Weiss <lfw@oc.com> wrote:
> Steve Summit wrote:
>> One of the whole points of using a high-level language is to
>> insulate you from low-level machine implementation details such
>> as the sizes of things in bits. If you find yourself needing to
>> know the sizes of things in bits, someone screwed up.
>
> How do you know how large your arrays could be without some
> consideration of the executing machine(s), perhaps done by
> inspecting size_t ?

Depends on how optimally you're trying to do things.

If you're a nutso flaming Standard-thumping language lawyer, you
know that the Standard doesn't guarantee that individual objects
can be larger than 32k, and you just don't ever try to allocate
large, contiguous arrays. In fact, lists or trees or sparse
arrays or (my latest favorite) lists of "chunks" are almost
always better data structures to use, anyway.

If efficiency or sanity (e.g. bitmap graphics processing) demand
large, contiguous arrays, it's true that you start caring more
about exact sizes, but in those cases you're often less portable
or strictly-conforming, anyway (especially if you're trying to
optimize).

                                        Steve Summit
                                        scs@eskimo.com